146 research outputs found

    Bias for consonantal information over vocalic information in 30-month-olds: cross-linguistic evidence from French and English.

    Using a name-based categorization task, Nazzi (2005) found that French-learning 20-month-olds can make use of one-feature consonantal contrasts between new labels but fail to do so with one-feature vocalic contrasts. This asymmetry was interpreted as developmental evidence for the proposal that consonants play a more important role than vowels at the lexical level. In the current study, using the same task, we first show that by 30 months French-learning infants can make use of one-feature vocalic contrasts (e.g., /pize/-/pyze/). Second, we show that in a situation where infants must neglect either a one-feature consonantal change or a one-feature vocalic change (e.g., match a /pide/ with either a /tide/ or a /pyde/), both French- and English-learning 30-month-olds choose to neglect the vocalic change rather than the consonantal change. We argue that these results suggest that by 30 months of age, infants still give less weight to vocalic information than to consonantal information in a lexically related task, even though they are able to process fine vocalic information.

    English-learning one- to two-year-olds do not show a consonant bias in word learning.

    Following the proposal that consonants are more involved than vowels in coding the lexicon (Nespor, Peña & Mehler, 2003), an early lexical consonant bias was found from age 1;2 in French, but an equal sensitivity to consonants and vowels from 1;0 to 2;0 in English. As different tasks were used in French and English, we sought to clarify this ambiguity by using an interactive word-learning study similar to that used in French, with British-English-learning toddlers aged 1;4 and 1;11. Children were taught two CVC labels differing in either a consonant or a vowel, and were then tested on their pairing of a third object named with one of the previously taught labels, or part of them. Consistent with previous research on British-English toddlers, our results provided no evidence of a general consonant bias. The language-specific mechanisms explaining the differential status of consonants and vowels in lexical development are discussed.

    Delayed acquisition of non-adjacent vocalic dependencies

    The ability to compute non-adjacent regularities is key in the acquisition of a new language. In the domain of phonology/phonotactics, sensitivity to non-adjacent regularities between consonants has been found to appear between 7 and 10 months. The present study focuses on the emergence of a posterior-anterior (PA) bias, a regularity involving two non-adjacent vowels. Experiments 1 and 2 show that a preference for PA over AP (anterior-posterior) words emerges between 10 and 13 months in French-learning infants. Control experiments show that this bias cannot be explained by adjacent or positional preferences. The present study demonstrates that infants become sensitive to non-adjacent vocalic distributional regularities between 10 and 13 months, revealing a delay in the acquisition of non-adjacent vocalic regularities compared to equivalent non-adjacent consonantal regularities. These results are consistent with the CV hypothesis, according to which consonants and vowels play different roles at different linguistic levels.

    Speech rhythm: a metaphor?

    Is speech rhythmic? In the absence of evidence for the traditional view that languages strive to coordinate either syllables or stress-feet with regular time intervals, we consider the alternative that languages exhibit contrastive rhythm subsisting merely in the alternation of stronger and weaker elements. This is initially plausible, particularly for languages with a steep ‘prominence gradient’, i.e. a large disparity between stronger and weaker elements; but we point out that alternation is poorly achieved even by a ‘stress-timed’ language such as English, and that, historically, languages have conspicuously failed to adopt simple phonological remedies that would ensure alternation. Languages seem more concerned to allow ‘syntagmatic contrast’ between successive units and to use durational effects to support linguistic functions than to facilitate rhythm. Furthermore, some languages (e.g. Tamil, Korean) lack the lexical prominence which would most straightforwardly underpin prominence alternation. We conclude that speech is not incontestably rhythmic, and may even be antirhythmic. However, its linguistic structure and patterning allow the metaphorical extension of rhythm in varying degrees and in different ways depending on the language, and it is this analogical process which allows speech to be matched to external rhythms.

    Categorization of regional and foreign accent in 5- to 7-year-old British children

    This study examines children's ability to detect accent-related information in connected speech. British English children aged 5 and 7 years were asked to discriminate their home accent from either an Irish accent or a French accent in a sentence categorization task. Using a preliminary accent rating task with adult listeners, it was first verified that the level of accentedness was similar across the two unfamiliar accents. Results showed that whereas the younger group performed only just above chance level in this task, the 7-year-olds could reliably distinguish between these variations of their own language, but were significantly better at detecting the foreign accent than the regional accent. These results extend and replicate a previous study (Girard, Floccia, & Goslin, 2008) in which it was found that 5-year-old French children could detect a foreign accent better than a regional accent. The factors underlying the relative lack of awareness of a regional accent, as opposed to a foreign accent, in childhood are discussed: the amount of exposure, the learnability of both types of accents, and a possible difference in the amount of vowel versus consonant variability, for which acoustic measures of vowel formants and plosive voice onset time are provided. © 2009 The International Society for the Study of Behavioural Development

    Listeners feel the beat: Entrainment to English and French speech rhythms

    Can listeners entrain to speech rhythms? Monolingual speakers of English and French and balanced English–French bilinguals tapped along with the beat they perceived in sentences spoken in a stress-timed language, English, and a syllable-timed language, French. All groups of participants tapped more regularly to English than to French utterances. Tapping performance was also influenced by the participants’ native language: English-speaking participants and bilinguals tapped more regularly and at higher metrical levels than did French-speaking participants, suggesting that long-term linguistic experience with a stress-timed language can differentiate speakers’ entrainment to speech rhythm.

    Cues for Early Social Skills: Direct Gaze Modulates Newborns' Recognition of Talking Faces

    Previous studies showed that, from birth, speech and eye gaze are two important cues guiding early face processing and social cognition. These studies tested the role of each cue independently; however, infants normally perceive speech and eye gaze together. Using a familiarization-test procedure, we first familiarized newborn infants (n = 24) with videos of unfamiliar talking faces with either direct gaze or averted gaze. Newborns were then tested with photographs of the previously seen face and of a new one. The newborns looked longer at the face that had previously talked to them, but only in the direct gaze condition. These results highlight the importance of both speech and eye gaze as socio-communicative cues by which infants identify others. They suggest that gaze and infant-directed speech, experienced together, are powerful cues for the development of early social skills.

    Host Sexual Dimorphism and Parasite Adaptation

    Disease expression and prevalence often vary between the two sexes of the host. This is typically attributed to innate differences between the sexes, but specific adaptations by the parasite to one or the other host sex may also contribute to these observations.

    A Melodic Contour Repeatedly Experienced by Human Near-Term Fetuses Elicits a Profound Cardiac Reaction One Month after Birth

    Human hearing develops progressively during the last trimester of gestation. Near-term fetuses can discriminate acoustic features, such as frequencies and spectra, and process complex auditory streams. Fetal and neonatal studies show that they can remember frequently recurring sounds. However, existing data can only show retention intervals of up to several days after birth.

    Here we show that auditory memories can last at least six weeks. Experimental fetuses were given precisely controlled exposure to a descending piano melody twice daily during the 35th, 36th, and 37th weeks of gestation. Six weeks later we assessed the cardiac responses of 25 exposed infants and 25 naive control infants, while in quiet sleep, to the descending melody and to an ascending control piano melody. The melodies had precisely inverse contours but similar spectra and identical duration, tempo, and rhythm, and thus almost identical amplitude envelopes. All infants displayed a significant heart rate change. In exposed infants, the descending melody evoked a cardiac deceleration that was twice as large as the decelerations elicited by the ascending melody and by both melodies in control infants.

    Thus, three weeks of prenatal exposure to a specific melodic contour affects infants' auditory processing, or perception, i.e., impacts the autonomic nervous system at least six weeks later, when infants are 1 month old. Our results extend the retention interval over which a prenatally acquired memory of a specific sound stream can be observed from 3-4 days to six weeks. The long-term memory for the descending melody is interpreted in terms of enduring neurophysiological tuning, and its significance for the developmental psychobiology of attention and perception, including early speech perception, is discussed.